risk and benefit
Demystify, Use, Reflect: Preparing students to be informed LLM-users
Chandrashekar, Nikitha Donekal, Nizamani, Sehrish Basir, Ellis, Margaret, Ramakrishnan, Naren
We transitioned our post-CS1 course that introduces various subfields of computer science so that it integrates Large Language Models (LLMs) in a structured, critical, and practical manner. It aims to help students develop the skills needed to engage meaningfully and responsibly with AI. The course now includes explicit instruction on how LLMs work, exposure to current tools, ethical issues, and activities that encourage student reflection on personal use of LLMs as well as the larger evolving landscape of AI-assisted programming. In class, we demonstrate the use and verification of LLM outputs, guide students in the use of LLMs as an ingredient in a larger problem-solving loop, and require students to disclose and acknowledge the nature and extent of LLM assistance. Throughout the course, we discuss the risks and benefits of LLMs across CS subfields. In our first iteration of the course, we collected and analyzed data from students' pre- and post-course surveys. Student understanding of how LLMs work became more technical, and their verification and use of LLMs shifted to be more discerning and collaborative. These strategies can be used in other courses to prepare students for the AI-integrated future.
- North America > United States > New York > New York County > New York City (0.07)
- North America > United States > Virginia (0.06)
- Europe > Italy > Lombardy > Milan (0.06)
- (3 more...)
- Research Report (0.64)
- Instructional Material > Course Syllabus & Notes (0.47)
Misalignments in AI Perception: Quantitative Findings and Visual Mapping of How Experts and the Public Differ in Expectations and Risks, Benefits, and Value Judgments
Brauner, Philipp, Glawe, Felix, Liehner, Gian Luca, Vervier, Luisa, Ziefle, Martina
Artificial Intelligence (AI) is transforming diverse societal domains, raising critical questions about its risks and benefits and the misalignments between public expectations and academic visions. This study examines how the general public (N=1110) -- people using or being affected by AI -- and academic AI experts (N=119) -- people shaping AI development -- perceive AI's capabilities and impact across 71 scenarios, including sustainability, healthcare, job performance, societal divides, art, and warfare. Participants evaluated each scenario on four dimensions: expected probability, perceived risk and benefit, and overall sentiment (or value). The findings reveal significant quantitative differences: experts anticipate higher probabilities, perceive lower risks, report greater utility, and express more favorable sentiment toward AI compared to non-experts. Notably, risk-benefit tradeoffs differ: the public assigns risk half the weight of benefits, while experts assign it only a third. Visual maps of these evaluations highlight areas of convergence and divergence, identifying potential sources of public concern. These insights offer actionable guidance for researchers and policymakers to align AI development with societal values, fostering public trust and informed governance.
- North America > United States (0.93)
- Europe (0.68)
- Asia (0.67)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (2 more...)
Mapping Public Perception of Artificial Intelligence: Expectations, Risk-Benefit Tradeoffs, and Value As Determinants for Societal Acceptance
Brauner, Philipp, Glawe, Felix, Liehner, Gian Luca, Vervier, Luisa, Ziefle, Martina
Understanding public perception of artificial intelligence (AI) and the tradeoffs between potential risks and benefits is crucial, as these perceptions might shape policy decisions, influence innovation trajectories for successful market strategies, and determine individual and societal acceptance of AI technologies. Using a representative sample of 1100 participants from Germany, this study examines mental models of AI. Participants quantitatively evaluated 71 statements about AI's future capabilities (e.g., autonomous driving, medical care, art, politics, warfare, and societal divides), assessing the expected likelihood of occurrence, perceived risks, benefits, and overall value. We present rankings of these projections alongside visual mappings illustrating public risk-benefit tradeoffs. While many scenarios were deemed likely, participants often associated them with high risks, limited benefits, and low overall value. Across all scenarios, 96.4% of the variance in value assessment ($r^2=.964$) can be explained by perceived risks ($\beta=-.504$) and perceived benefits ($\beta=+.710$), with no significant relation to expected likelihood. Demographics and personality traits influenced perceptions of risks, benefits, and overall evaluations, underscoring the importance of increasing AI literacy and tailoring public information to diverse user needs. These findings provide actionable insights for researchers, developers, and policymakers by highlighting critical public concerns and individual factors essential for aligning AI development with individual values.
- Oceania > Australia (0.14)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Aachen (0.04)
- South America > Brazil (0.04)
- (11 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (4 more...)
Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning
Kaufman, Robert, Lee, Emi, Bedmutha, Manas Satish, Kirsh, David, Weibel, Nadir
Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption. To design trustworthy AVs, we need to better understand the individual traits, attitudes, and experiences that impact people's trust judgements. We use machine learning to understand the most important factors that contribute to young adult trust based on a comprehensive set of personal factors gathered via survey (n = 1457). Factors ranged from psychosocial and cognitive attributes to driving style, experiences, and perceived AV risks and benefits. Using the explainable AI technique SHAP, we found that perceptions of AV risks and benefits, attitudes toward feasibility and usability, institutional trust, prior experience, and a person's mental model are the most important predictors. Surprisingly, psychosocial and many technology- and driving-specific factors were not strong predictors. Results highlight the importance of individual differences for designing trustworthy AVs for diverse groups and lead to key implications for future design and research.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > California > San Diego County > San Diego (0.05)
- North America > United States > California > San Diego County > La Jolla (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.93)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Information Technology > Security & Privacy (1.00)
- (5 more...)
Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
Schmude, Timothée, Koesten, Laura, Möller, Torsten, Tschiatschek, Sebastian
Explanations of AI systems rarely address the information needs of people affected by algorithmic decision-making (ADM). This gap between conveyed information and information that matters to affected stakeholders can impede understanding and adherence to regulatory frameworks such as the AI Act. To address this gap, we present the "XAI Novice Question Bank": A catalog of affected stakeholders' information needs in two ADM use cases (employment prediction and health monitoring), covering the categories data, system context, system usage, and system specifications. Information needs were gathered in an interview study where participants received explanations in response to their inquiries. Participants further reported their understanding and decision confidence, showing that while confidence tended to increase after receiving explanations, participants also met understanding challenges, such as being unable to tell why their understanding felt incomplete. Explanations further influenced participants' perceptions of the systems' risks and benefits, which they confirmed or changed depending on the use case. When risks were perceived as high, participants expressed particular interest in explanations about intention, such as why and to what end a system was put in place. With this work, we aim to support the inclusion of affected stakeholders into explainability by contributing an overview of information and challenges relevant to them when deciding on the adoption of ADM systems. We close by summarizing our findings in a list of six key implications that inform the design of future explanations for affected stakeholder audiences.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > Austria > Vienna (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (19 more...)
- Government (1.00)
- Education (1.00)
- Law (0.87)
- (2 more...)
Regulating AI: 3 experts explain why it's difficult to do and important to get right
From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation on steroids. Indeed, a group of artificial intelligence researchers and industry figures urged the industry on March 29, 2023, to pause further training of the latest AI technologies or, barring that, for governments to "impose a moratorium." These technologies – image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA – are now available to millions of people and don't require technical knowledge to use. Given the potential for widespread harm as technology companies roll out these AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three experts on technology policy to explain why regulating AI is such a challenge – and why it's so important to get it right.
- North America > United States > New York (0.25)
- North America > United States > California > Los Angeles County > Los Angeles (0.15)
- North America > United States > Texas (0.05)
- Law > Statutes (0.96)
- Government > Regional Government > North America Government > United States Government (0.35)
Risks and benefits of artificial intelligence - Dataconomy
We examined the risks and benefits of artificial intelligence and tried to decide whether it is evil or not. Humans have long desired to construct machines that can make decisions. For a long time this seemed a possibility too good to be true, seen only in science-fiction films. Self-driving automobiles and apps like SIRI have made this far-fetched fantasy come true. In 1955, John McCarthy coined the term "Artificial Intelligence." He wrote that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
- Information Technology (1.00)
- Transportation > Passenger (0.69)
- Transportation > Ground > Road (0.69)
Council Post: The Risks And Benefits Of AI In Medicine
Terence Mills, CEO of AI.io, a data science & engineering company that is delivering AI solutions in healthcare, travel, and entertainment. There is no denying that AI is revolutionizing health care. In the past two years, investment in AI from health care organizations has been growing exponentially. According to research conducted by Deloitte, "75% of large organizations (annual revenue of over $10 billion) invested over $50 million in AI" in 2019. From greater precision and efficiency to taking pressure off overworked doctors and health care professionals, the benefits of AI in health care are numerous.
Risks and benefits of an AI revolution in medicine
Third in a series that taps the expertise of the Harvard community to examine the promise and potential pitfalls of the coming age of artificial intelligence and machine learning. The news is bad: "I'm sorry, but you have cancer." Those unwelcome words sink in for a few minutes, and then your doctor begins describing recent advances in artificial intelligence, advances that let her compare your case to the cases of every other patient who's ever had the same kind of cancer. She says she's found the most effective treatment, one best suited for the specific genetic subtype of the disease in someone with your genetic background -- truly personalized medicine. And the prognosis is good.
Report prepared by the Montreal AI Ethics Institute (MAIEI) for Publication Norms for Responsible AI by Partnership on AI
Gupta, Abhishek, Lanteigne, Camylle, Heath, Victoria
The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. In order to ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it's difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) collaborated with the Partnership on AI in May 2020 to host two public consultation meetups. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers. In its submission, MAIEI provides six initial recommendations: 1) create tools to navigate publication decisions, 2) offer a page-number extension, 3) develop a network of peers, 4) require broad impact statements, 5) require the publication of expected results, and 6) revamp the peer-review process. After considering potential concerns regarding these recommendations, including constraining innovation and creating a "black market" for AI research, MAIEI outlines three ways forward for publishers: 1) state clearly and consistently the need for established norms, 2) coordinate and build trust as a community, and 3) change the approach.
- North America > Canada > Quebec > Montreal (0.62)
- North America > United States > New York (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- (4 more...)
- Media (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- (3 more...)